9 research outputs found

    Designing mobile interactions for the ageing populations

    We are concurrently witnessing two significant shifts: mobiles are becoming the most used computing devices, and older people are becoming the largest demographic group. However, despite the recent increase in related CHI publications, older adults remain underrepresented in HCI research and in commercial products, further widening the digital divide they face and hampering their social participation. This workshop aims to increase the momentum for such research within CHI and related fields such as gerontechnology. We plan to create a space for discussing and sharing principles and strategies for designing and evaluating mobile user interfaces for the aging population. We therefore welcome contributions on empirical studies, theories, and the design and evaluation of mobile interfaces for older adults.

    Rethinking mobile interfaces for older adults

    This SIG advances the study of mobile user interfaces for the aging population. The topic is timely: the mobile device has become the most widely used computer terminal, and the number of older people will soon exceed the number of children worldwide. However, most HCI research addresses younger adults and has had little impact on older adults. Some design trends, such as the mantra “smaller is smarter”, contradict the needs of older users. Such developments may diminish older adults' ability to access information and participate in society, leading to further isolation (social and physical) and a widening of the digital divide. This SIG aims to discuss mobile interfaces for older adults and has three goals: (i) to map the state of the art, (ii) to build a community gathering experts from related areas, and (iii) to raise awareness within the SIGCHI community. The SIG will be open to all at CHI.

    Convolutional Neural Network-Based Low-Powered Wearable Smart Device for Gait Abnormality Detection

    Gait analysis is a powerful technique that detects and identifies foot disorders and walking irregularities, including pronation, supination, and unstable foot movements. Early detection can help prevent injuries, correct walking posture, and avoid the need for surgery or cortisone injections. Traditional gait analysis methods are expensive and only available in laboratory settings, but new wearable technologies such as AI- and IoT-based devices, smart shoes, and insoles have the potential to make gait analysis more accessible, especially for people who cannot easily access specialized facilities. This research proposes a novel approach using IoT, edge computing, and tiny machine learning (TinyML) to predict gait patterns using a microcontroller-based device worn on a shoe. The device uses an inertial measurement unit (IMU) sensor and a TinyML model on an Advanced RISC Machine (ARM) chip to classify and predict abnormal gait patterns, providing a more accessible, cost-effective, and portable way to conduct gait analysis.
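    The pipeline described above — an IMU window classified on-device by a small neural network — can be sketched as follows. All sizes, weights, and class counts here are illustrative assumptions, not the paper's architecture; a deployed TinyML model would run trained weights exported to the microcontroller.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model sizes: 3-axis IMU window of 128 samples,
# 8 conv filters of width 5, 4 gait classes (all assumptions).
WINDOW, AXES, FILTERS, WIDTH, CLASSES = 128, 3, 8, 5, 4
conv_w = rng.normal(0, 0.1, (FILTERS, AXES, WIDTH))  # untrained stand-in weights
conv_b = np.zeros(FILTERS)
fc_w = rng.normal(0, 0.1, (CLASSES, FILTERS))
fc_b = np.zeros(CLASSES)

def classify(window):
    """Forward pass of a tiny 1D CNN over one IMU window (WINDOW x AXES)."""
    x = window.T                         # (AXES, WINDOW)
    steps = WINDOW - WIDTH + 1
    feat = np.zeros((FILTERS, steps))
    for f in range(FILTERS):             # valid 1D convolution per filter,
        for t in range(steps):           # summed over the three IMU axes
            feat[f, t] = np.sum(conv_w[f] * x[:, t:t + WIDTH]) + conv_b[f]
    feat = np.maximum(feat, 0)           # ReLU
    pooled = feat.mean(axis=1)           # global average pooling
    logits = fc_w @ pooled + fc_b
    p = np.exp(logits - logits.max())    # numerically stable softmax
    return p / p.sum()                   # class probabilities

probs = classify(rng.normal(size=(WINDOW, AXES)))
```

    A convolution-plus-pooling model like this is a common TinyML choice because its memory footprint is fixed and small enough for a microcontroller.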

    Designing Mid-Air TV Gestures for Blind People Using User and Choice-Based Elicitation Approaches

    Mid-air gestures enable intuitive and natural interactions. However, few studies have investigated the use of mid-air gestures by blind people. TV interaction is one promising use of mid-air gestures for blind people, as “listening” to TV is one of their most common activities. Thus, we investigated mid-air TV gestures for blind people through two studies. Study 1 used a user-elicitation approach in which blind people were asked to define gestures for a given set of commands. From it, we present a classification of gesture types and the frequency of body-part usage. However, our participants had difficulty imagining gestures for some commands, so we conducted Study 2, which used a choice-based elicitation approach in which participants selected their favorite gesture from a predefined list of choices. We found that providing choices helps guide users to discover suitable gestures for unfamiliar commands. We discuss concrete design guidelines for mid-air TV gestures for blind people.
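    Gesture-elicitation studies like Study 1 are commonly analyzed with an agreement score: for each command, the proposed gestures are grouped by identity and the squared group proportions are summed, so 1.0 means every participant proposed the same gesture. The abstract does not state which metric this paper used, so the following is only a sketch of that standard computation, with hypothetical proposal labels.

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement score for one command: sum over identical-gesture
    groups of (group size / total proposals) squared."""
    total = len(proposals)
    return sum((n / total) ** 2 for n in Counter(proposals).values())

# Hypothetical proposals for one command from 8 participants.
score = agreement_score(["swipe_up"] * 5 + ["raise_hand"] * 2 + ["point_up"])
# (5/8)^2 + (2/8)^2 + (1/8)^2 = 0.46875
```

    Low scores flag the "hard to imagine" commands, which is exactly where a choice-based follow-up like Study 2 helps.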

    A Statistical Approach to Classify Nationality of Name

    Named entities (NEs), especially personal names, are important components in interpreting some kinds of text documents, e.g. news. To extract personal names efficiently, statistical language models are required to capture the characteristics of personal names. Among these characteristics, the nationality of a name is a useful source for interpreting the document. Automatically inferring nationality from a name also directly helps a user gain more information from the name. In this paper, we therefore propose a statistical approach to identify the nationality of names written in Thai. Extracting features from decomposed personal names, probabilistic bigram and trigram models are used with naive Bayesian classification to assign the most likely class to a name. To evaluate the proposed approach, a number of experiments are conducted on real-world data. The experimental results show that our approach works efficiently, with about 94% accuracy.
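    A minimal sketch of the described classifier — character n-gram features fed to a naive Bayes model with Laplace smoothing — is below. The Latin-script training names and all parameters are illustrative assumptions; the paper works on names written in Thai and derives features from decomposed name components.

```python
import math
from collections import Counter, defaultdict

def char_ngrams(name, n):
    """Character n-grams of a name, padded with start/end markers."""
    padded = f"^{name}$"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

class NgramNaiveBayes:
    """Naive Bayes over character n-grams of a name (illustrative sketch)."""
    def __init__(self, n=2, alpha=1.0):
        self.n, self.alpha = n, alpha          # alpha = Laplace smoothing
        self.counts = defaultdict(Counter)     # class -> n-gram counts
        self.class_totals = Counter()
        self.vocab = set()

    def fit(self, names, labels):
        for name, y in zip(names, labels):
            grams = char_ngrams(name, self.n)
            self.counts[y].update(grams)
            self.class_totals[y] += 1
            self.vocab.update(grams)

    def predict(self, name):
        grams = char_ngrams(name, self.n)
        total_docs = sum(self.class_totals.values())
        v = len(self.vocab)
        best, best_lp = None, -math.inf
        for y, cnt in self.counts.items():
            lp = math.log(self.class_totals[y] / total_docs)   # log prior
            denom = sum(cnt.values()) + self.alpha * v
            for g in grams:                                    # log likelihoods
                lp += math.log((cnt[g] + self.alpha) / denom)
            if lp > best_lp:
                best, best_lp = y, lp
        return best

# Toy usage with hypothetical romanized names and two classes.
clf = NgramNaiveBayes(n=2)
clf.fit(["somchai", "somsak", "suda", "john", "james", "mary"],
        ["thai"] * 3 + ["english"] * 3)
guess = clf.predict("somporn")
```

    The same class works unchanged with trigrams (`n=3`); the paper combines both bigram and trigram models.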

    Modelling Learning of New Keyboard Layouts

    Predicting how users learn new or changed interfaces is a long-standing objective in HCI research. This paper contributes to the understanding of visual search and learning in text entry. With the goal of explaining the variance in novices' typing performance that is attributable to visual search, a model was designed to predict how users learn to locate keys on a keyboard: initially relying on visual short-term memory and then transitioning to recall-based search. This allows predicting search times and visual search patterns for completely and partially new layouts. The model complements models of motor performance and learning in text entry by predicting change in visual search patterns over time. Practitioners can use it to estimate how long it takes to reach a desired level of performance with a given layout. Peer reviewed.
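    The transition the model predicts — from slow visual search toward fast recall-based key location — can be illustrated with a simple mixture: expected search time blends a visual-search time and a recall time, weighted by a recall probability that grows with exposures to the key. The constants and the exponential learning curve below are illustrative assumptions, not the paper's fitted model.

```python
import math

# Illustrative parameters (assumptions, not fitted values):
T_VISUAL = 1.2    # mean visual-search time per key, seconds
T_RECALL = 0.3    # mean recall-based location time, seconds
LEARN_RATE = 0.25

def p_recall(exposures):
    """Probability the key location is recalled from memory after
    a number of exposures (exponential learning curve)."""
    return 1.0 - math.exp(-LEARN_RATE * exposures)

def expected_search_time(exposures):
    """Mixture of recall-based and visual search times."""
    p = p_recall(exposures)
    return p * T_RECALL + (1 - p) * T_VISUAL

# Expected per-key search time after 0, 5, 10, 15, 20 exposures.
times = [expected_search_time(k) for k in range(0, 21, 5)]
```

    A partially new layout can be modeled by starting moved keys at zero exposures while unchanged keys keep their accumulated count.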

    Ability-based optimization of touchscreen interactions

    Ability-based optimization is a computational approach for improving interface designs for users with sensorimotor and cognitive impairments. Designs are created by an optimizer, evaluated against task-specific cognitive models, and adapted to individual abilities. The approach does not necessitate extensive data collection and could be applied both automatically and manually by users, designers, or caretakers. As a first step, the authors present optimized touchscreen layouts for users with tremor and dyslexia that potentially improve text-entry speed and reduce errors. Peer reviewed. (OpenAIRE funding: EC/H2020/637991/EU // COMPUTED.)
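    The loop the abstract describes — generate candidate designs, score each against a model of the user's abilities, keep the best — can be sketched with a toy cost model for tremor. Every constant and formula below is an illustrative assumption; the actual work uses task-specific cognitive models rather than this simple Fitts'-law-plus-error heuristic.

```python
import math

KEYS = 26          # letter keys to lay out (assumption)
SCREEN_MM = 120.0  # usable keyboard width (assumption)

def predicted_cost(key_size_mm, tremor_amp_mm):
    """Toy per-keystroke cost: Fitts'-law pointing time, a page-switch
    penalty when large keys overflow the screen, and an error penalty
    that grows as tremor amplitude approaches the key size."""
    per_row = max(1, int(SCREEN_MM // key_size_mm))
    rows = math.ceil(KEYS / per_row)
    pages = math.ceil(rows / 4)                 # at most 4 visible rows
    distance = (SCREEN_MM + min(rows, 4) * key_size_mm) / 2
    time = 0.1 + 0.15 * math.log2(distance / key_size_mm + 1)
    time += 0.3 * (pages - 1)                   # amortized page switching
    error_rate = min(1.0, 0.5 * tremor_amp_mm / key_size_mm)
    return time + 2.0 * error_rate              # 2 s assumed per error

def optimize_key_size(tremor_amp_mm, candidates=range(4, 21)):
    """Exhaustive search: the key size (mm) with lowest predicted cost."""
    return min(candidates, key=lambda s: predicted_cost(s, tremor_amp_mm))

best_mild = optimize_key_size(1.0)    # mild tremor
best_severe = optimize_key_size(6.0)  # severe tremor
```

    The point of the design is that only `tremor_amp_mm` is user-specific, so adapting a layout to an individual needs one measured ability parameter rather than a large dataset.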

    Approaching Engagement towards Human-Engaged Computing

    Debates regarding the nature and role of HCI research and practice have intensified in recent years, given the increasingly intertwined relations between humans and technologies. The framework of Human-Engaged Computing (HEC) was proposed and developed over a series of scholarly workshops to complement mainstream HCI models by leveraging the synergy between humans and computers through its key notion of "engagement". Previous workshop meetings found "engagement" to be a constructive and extensible notion through which to investigate synergized human-computer relationships, but many aspects of the core concept remain underexplored. This SIG aims to tackle the notion of engagement through discussion of four thematic threads. It will bring together HCI practitioners and researchers from different disciplines, including the humanities, design, positive psychology, communication and media studies, neuroscience, philosophy, and Eastern studies, to share and discuss relevant knowledge and insights and to identify new research opportunities and future directions. Peer reviewed.